ONNXModelHub

class ONNXModelHub(cacheDirectory: File) : ModelHub

This class provides methods for loading ONNX models into the local cacheDirectory.

Since

0.3

Parameters

cacheDirectory

The directory for all loaded models. It must exist before any model is loaded, and the process must have read and write permissions for it on your OS.
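A minimal setup sketch. The directory location is hypothetical (any writable path works), and the import path shown is an assumption that may differ between KotlinDL versions:

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModelHub // package path may vary by version

fun main() {
    // Hypothetical cache location; any directory the process can read and write works.
    val cacheDirectory = File("cache/pretrainedModels")
    // Create it (including parent directories) before constructing the hub.
    if (!cacheDirectory.exists()) cacheDirectory.mkdirs()
    val hub = ONNXModelHub(cacheDirectory)
    println("Hub caches models in: ${cacheDirectory.path}")
}
```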

Constructors

ONNXModelHub
fun ONNXModelHub(cacheDirectory: File)

Functions

get
operator fun <T : InferenceModel, U : InferenceModel> get(modelType: ModelType<T, U>): U
loadModel
open override fun <T : InferenceModel, U : InferenceModel> loadModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): T

Loads model configuration without weights.

fun loadModel(modelType: OnnxModelType<*>, vararg executionProviders: ExecutionProvider, loadingMode: LoadingMode = LoadingMode.SKIP_LOADING_IF_EXISTS): OnnxInferenceModel

This method loads the model from the ONNX model zoo corresponding to the specified modelType. The loadingMode parameter defines how an already-downloaded model is handled: if loadingMode is LoadingMode.SKIP_LOADING_IF_EXISTS and the model is already present, it is loaded from the local cacheDirectory; if loadingMode is LoadingMode.OVERRIDE_IF_EXISTS, the model is downloaded again and the cached copy is overwritten. executionProviders lists the execution providers to be used for model inference.
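A usage sketch for this overload. The model type ONNXModels.CV.ResNet50 and all import paths are assumptions that may differ between KotlinDL versions; the first call downloads the model, later calls reuse the cached copy under SKIP_LOADING_IF_EXISTS:

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.inference.keras.loaders.LoadingMode // path may vary by version
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModelHub
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModels // assumed location of model zoo types
import org.jetbrains.kotlinx.dl.api.inference.onnx.executionproviders.ExecutionProvider.CPU

fun main() {
    val hub = ONNXModelHub(File("cache/pretrainedModels"))
    // Download on first use; reuse the cached file afterwards (the default mode).
    val model = hub.loadModel(
        ONNXModels.CV.ResNet50(),                        // assumed model type; any OnnxModelType works
        CPU(),                                           // execution provider(s) for inference
        loadingMode = LoadingMode.SKIP_LOADING_IF_EXISTS
    )
    // OnnxInferenceModel is AutoCloseable; use { } releases native resources.
    model.use { loaded ->
        println("Loaded: $loaded")
    }
}
```

Passing a GPU provider (e.g. CUDA) in place of CPU would route inference to the corresponding ONNX Runtime backend, provided the matching runtime dependency is available.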

loadPretrainedModel
fun <T : InferenceModel, U : InferenceModel> loadPretrainedModel(modelType: ModelType<T, U>, loadingMode: LoadingMode): U
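A sketch contrasting loadPretrainedModel with the get operator above, which is shorthand for the same typed, weights-included model. The model type and import paths are assumptions and may differ between versions:

```kotlin
import java.io.File
import org.jetbrains.kotlinx.dl.api.inference.keras.loaders.LoadingMode // path may vary by version
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModelHub
import org.jetbrains.kotlinx.dl.api.inference.onnx.ONNXModels // assumed location of model zoo types

fun main() {
    val hub = ONNXModelHub(File("cache/pretrainedModels"))
    val modelType = ONNXModels.CV.ResNet50() // assumed model type

    // Explicit call: returns the typed pretrained model (U) with weights loaded.
    val pretrained = hub.loadPretrainedModel(modelType, LoadingMode.SKIP_LOADING_IF_EXISTS)

    // Equivalent shorthand via the get operator (loading mode is chosen by the hub).
    val same = hub[modelType]

    pretrained.use { println("Pretrained: $it") }
    same.use { println("Via get: $it") }
}
```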

Properties

cacheDirectory
val cacheDirectory: File